    Trace-based Performance Analysis for Hardware Accelerators

    This thesis presents how performance data from hardware accelerators can be included in event logs. It extends the capabilities of trace-based performance analysis to also monitor and record data from this novel parallelization layer. The increasing awareness of the power consumption of computing devices has also led to an interest in hybrid computing architectures. High-end computers, workstations, and mobile devices have started to employ hardware accelerators to offload computationally intense and parallel tasks, while at the same time retaining a highly efficient scalar compute unit for non-parallel tasks. This execution pattern is typically asynchronous, so that the scalar unit can resume other work while the hardware accelerator is busy. Performance analysis tools provided by the hardware accelerator vendors cover the situation of one host using one device very well, yet they do not address the needs of the high performance computing community. This thesis investigates ways to extend existing methods for recording events from highly parallel applications to also cover scenarios in which hardware accelerators aid these applications. After introducing a generic approach that is suitable for any API-based acceleration paradigm, the thesis derives a suggestion for a generic performance API for hardware accelerators and its implementation with NVIDIA CUPTI. Next, the visualization of event logs containing data from execution streams on different levels of parallelism is discussed. To overcome the limitations of classic performance profiles and timeline displays, a graph-based visualization using Parallel Performance Flow Graphs (PPFGs) is introduced. This novel approach uses program states to display similarities and differences between the potentially very large number of event streams and thus enables load imbalances to be spotted quickly. The thesis concludes with an in-depth case study of PIConGPU, a highly parallel, multi-hybrid plasma physics simulation, which benefited greatly from the developed performance analysis methods.
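
    The abstract above describes a generic, API-based recording approach: every call into the acceleration API is intercepted so that host-side enter/leave events and the asynchronously reported device activity end up in one shared event log. The Python sketch below illustrates that idea only schematically; the wrapper, the device callback, and the log format are assumptions for illustration, not the instrumentation interface developed in the thesis (which targets NVIDIA CUPTI).

        import time
        import threading

        event_log = []                       # one record per event, ordered per stream
        log_lock = threading.Lock()

        def record(stream, kind, name, t_begin, t_end):
            """Append one event (host-side call or device activity) to the event log."""
            with log_lock:
                event_log.append({"stream": stream, "kind": kind, "name": name,
                                  "begin": t_begin, "end": t_end})

        def traced(api_call):
            """Wrap an accelerator API call so its host-side duration is recorded."""
            def wrapper(*args, **kwargs):
                t0 = time.perf_counter()
                result = api_call(*args, **kwargs)   # typically returns immediately (asynchronous offload)
                record("host", "api_call", api_call.__name__, t0, time.perf_counter())
                return result
            return wrapper

        def on_device_activity(device_id, kernel_name, t_begin, t_end):
            """Callback through which the accelerator runtime reports finished device activity."""
            record(f"device:{device_id}", "kernel", kernel_name, t_begin, t_end)

        @traced
        def launch_kernel(name, grid):       # hypothetical stand-in for a real, asynchronous launch API
            pass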

    HERMES: Automated software publication with rich metadata

    Publication of research software is an important step in making software more discoverable. Ideally, rich metadata are published alongside software artifacts to further enable software comprehension, citation, and reuse ("FAIR software"). The provision of these metadata is currently often an arduous manual process, as is their curation. A new project in the Helmholtz Metadata Collaboration, HERMES, aims to automate software publication with rich metadata from individual source code repositories, by retrieving, collating and validating metadata, and depositing them with software artifacts in publication repositories. This talk outlines the concept and implementation of the project and its outcomes.

    Software publications with rich metadata: state of the art, automated workflows and HERMES concept

    To satisfy the principles of FAIR software, software sustainability and software citation, research software must be formally published. Publication repositories make this possible and provide published software versions with unique and persistent identifiers. However, software publication is still a tedious, mostly manual process. To streamline software publication, HERMES, a project funded by the Helmholtz Metadata Collaboration, develops automated workflows to publish research software with rich metadata. The tooling developed by the project utilizes continuous integration solutions to retrieve, collate, and process existing metadata in source repositories, and publish them in publication repositories, including checks against existing metadata requirements. To accompany the tooling and enable researchers to easily reuse it, the project also provides comprehensive documentation and templates for widely used CI solutions. In this paper, we outline the concept for these workflows and describe how our solution advances the state of the art in research software publication.
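
    As a rough illustration of the harvest-and-collate stage described above, the sketch below reads metadata from two common in-repository sources, merges them with a simple precedence rule, and checks the result against a set of required fields. The file names, field list, and merge rule are assumptions for illustration; this is not the actual HERMES implementation or its deposit requirements.

        import json
        from pathlib import Path

        import yaml                                    # PyYAML, for reading CITATION.cff

        REQUIRED_FIELDS = ("name", "version", "author", "license")   # illustrative requirements

        def harvest(repo_path):
            """Collect raw metadata records from files in a source code repository."""
            repo, records = Path(repo_path), []
            codemeta = repo / "codemeta.json"
            if codemeta.exists():
                records.append(("codemeta", json.loads(codemeta.read_text())))
            cff = repo / "CITATION.cff"
            if cff.exists():
                data = yaml.safe_load(cff.read_text())
                # Map a few Citation File Format keys onto CodeMeta-style terms.
                records.append(("cff", {"name": data.get("title"),
                                        "version": data.get("version"),
                                        "author": data.get("authors"),
                                        "license": data.get("license")}))
            return records

        def collate(records):
            """Merge harvested records; earlier sources win over later ones."""
            merged = {}
            for _source, data in records:
                for key, value in data.items():
                    if value is not None:
                        merged.setdefault(key, value)
            return merged

        def validate(metadata):
            """Check the collated metadata against the (illustrative) deposit requirements."""
            missing = [field for field in REQUIRED_FIELDS if not metadata.get(field)]
            if missing:
                raise ValueError(f"metadata incomplete, missing: {missing}")
            return metadata

        metadata = validate(collate(harvest(".")))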

    teaMPI---replication-based resiliency without the (performance) pain.

    In an era where we cannot afford to checkpoint frequently, replication is a generic way forward to construct numerical simulations that can continue to run even if hardware parts fail. Yet, replication often is not employed on larger scales, as naïvely mirroring a computation once effectively halves the machine size, and as keeping replicated simulations consistent with each other is not trivial. We demonstrate for the ExaHyPE engine, a task-based solver for hyperbolic equation systems, that it is possible to realise resiliency without major code changes on the user side, while we introduce a novel algorithmic idea where replication reduces the time-to-solution. The redundant CPU cycles are not burned “for nothing”. Our work employs a weakly consistent data model where replicas run independently yet inform each other through heartbeat messages whether they are still up and running. Our key performance idea is to let the tasks of the replicated simulations share some of their outcomes, while we shuffle the actual task execution order per replica. This way, replicated ranks can skip some local computations and automatically start to synchronise with each other. Our experiments with a production-level seismic wave-equation solver provide evidence that this novel concept has the potential to make replication affordable for large-scale simulations in high-performance computing.
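
    The key performance idea, a shuffled task order per replica combined with sharing of task outcomes, can be sketched schematically with mpi4py. The two-team rank layout, the message tag, and the placeholder task body below are assumptions; heartbeat monitoring and the actual ExaHyPE/teaMPI machinery are omitted, so this illustrates the concept rather than the implementation.

        from mpi4py import MPI
        import random

        world = MPI.COMM_WORLD                        # assumes an even number of ranks
        team = world.Get_rank() % 2                   # two replicas: even ranks form team 0, odd ranks team 1
        team_comm = world.Split(color=team, key=world.Get_rank())   # would carry the replica's own simulation
        sibling = world.Get_rank() + 1 if team == 0 else world.Get_rank() - 1   # same work, other replica

        tasks = list(range(32))                       # IDs of tasks whose outcomes both replicas need
        random.Random(team).shuffle(tasks)            # replica-specific execution order

        outcomes, pending = {}, []
        for task in tasks:
            # Drain any task outcomes the sibling replica has already shared.
            while world.iprobe(source=sibling, tag=1):
                tid, result = world.recv(source=sibling, tag=1)
                outcomes.setdefault(tid, result)
            if task in outcomes:
                continue                              # the sibling was faster: skip the local computation
            result = task * task                      # placeholder for the real task body
            outcomes[task] = result
            pending.append(world.isend((task, result), dest=sibling, tag=1))

        MPI.Request.waitall(pending)                  # outcomes still in flight are simply ignored in this sketch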

    Multiscale modeling of bacterial colonies: how pili mediate the dynamics of single cells and cellular aggregates

    Neisseria gonorrhoeae is the causative agent of one of the most common sexually transmitted diseases, gonorrhea. Over the past two decades there has been an alarming increase in reported gonorrhea cases in which the bacteria were resistant to the most commonly used antibiotics, prompting a search for alternative antimicrobial treatment strategies. The crucial step in this and many other bacterial infections is the formation of microcolonies, agglomerates consisting of up to several thousand cells. The attachment and motility of cells on solid substrates as well as the cell–cell interactions are primarily mediated by type IV pili, long polymeric filaments protruding from the surface of cells. While the crucial role of pili in the assembly of microcolonies has been well recognized, the exact mechanisms of how they govern the formation and dynamics of microcolonies are still poorly understood. Here, we present a computational model of individual cells with explicit pili dynamics, force generation and pili–pili interactions. We employ the model to study a wide range of biological processes, such as the motility of individual cells on a surface, the heterogeneous cell motility within the large cell aggregates, and the merging dynamics and the self-assembly of microcolonies. The results of numerical simulations highlight the central role of pili-generated forces in the formation of bacterial colonies and are in agreement with the available experimental observations. The model can quantify the behavior of multicellular bacterial colonies on biologically relevant temporal and spatial scales and can be easily adjusted to include the geometry and pili characteristics of various bacterial species. Ultimately, the combination of the microbiological experimental approach with the in silico model of bacterial colonies might provide new qualitative and quantitative insights into the development of bacterial infections and thus pave the way to new antimicrobial treatments.
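
    The abstract describes the model only qualitatively. As a loose, standalone illustration of the pili "tug-of-war" picture (not the authors' model), the toy sketch below moves a single overdamped cell under forces from stochastically attaching, retracting, and detaching pili; all parameters and update rules are invented for illustration.

        import math
        import random

        # Toy parameters, invented for illustration
        N_PILI, F_PILUS = 10, 1.0          # pili per cell, pull force per attached pilus
        ATTACH_P, DETACH_P = 0.02, 0.05    # attachment/detachment probability per time step
        GAMMA, DT = 1.0, 0.01              # friction coefficient (overdamped dynamics), time step

        rng = random.Random(0)
        pos = [0.0, 0.0]                   # cell position on the substrate
        pili = [None] * N_PILI             # substrate anchor point of each attached pilus, or None

        for step in range(10_000):
            # Stochastic attachment and detachment of pili.
            for i, anchor in enumerate(pili):
                if anchor is None and rng.random() < ATTACH_P:
                    angle = rng.uniform(0.0, 2.0 * math.pi)
                    pili[i] = (pos[0] + math.cos(angle), pos[1] + math.sin(angle))
                elif anchor is not None and rng.random() < DETACH_P:
                    pili[i] = None
            # Each attached, retracting pilus pulls the cell body towards its anchor.
            fx = fy = 0.0
            for anchor in pili:
                if anchor is None:
                    continue
                dx, dy = anchor[0] - pos[0], anchor[1] - pos[1]
                dist = math.hypot(dx, dy) or 1.0
                fx += F_PILUS * dx / dist
                fy += F_PILUS * dy / dist
            # Overdamped update: velocity is proportional to the net pili force.
            pos[0] += fx / GAMMA * DT
            pos[1] += fy / GAMMA * DT

        print("final cell position:", pos)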

    Research software on wings: Automating software publication with rich metadata

    Publishing your research software in a publication repository is the first step on the path to making your software FAIR! But the publication of just the software itself is not quite enough: To truly enable findability, accessibility and reproducibility, as well as making your software correctly citable and unlocking credit for your work, your software publication must come with the rich metadata that support these goals. But where will these metadata come from? And who should compile and publish them? Will RSEs have to become metadata experts as well now? In this talk, we argue that source code repositories and connected platforms often already provide many useful metadata, even if they are distributed over heterogeneous sources. We present an open source software toolchain that will help harvest these metadata, process and collate them, preparing them for submission to publication repositories. This toolchain can be automated via continuous integration platforms and can publish the prepared metadata with or without the respective software artifacts for open and closed source software alike. It can also feed the collated metadata back to source code repositories, or provide them in different formats for further reuse. The talk will outline the concept for the automated publication of research software with rich metadata and describe the current state of the software toolchain that is being developed. It will also detail the CI and publication platforms it will initially be available for, additional resources such as documentation and training materials, and give an outlook on sustainability and future development.
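
    One of the reuse paths mentioned above, providing the collated metadata in other formats and feeding them back to source code repositories, can be illustrated with a small conversion sketch from a CodeMeta-style record to Citation File Format (CITATION.cff) YAML. The field mapping is deliberately minimal and is an assumption for illustration, not the toolchain's actual converter.

        import yaml                                    # PyYAML

        def codemeta_to_cff(codemeta):
            """Render a collated CodeMeta-style record as Citation File Format YAML."""
            cff = {
                "cff-version": "1.2.0",
                "message": "If you use this software, please cite it using these metadata.",
                "title": codemeta.get("name"),
                "version": codemeta.get("version"),
                "license": codemeta.get("license"),    # a real converter would normalise to an SPDX identifier
                "repository-code": codemeta.get("codeRepository"),
                "authors": [
                    {"given-names": a.get("givenName"), "family-names": a.get("familyName")}
                    for a in codemeta.get("author", [])
                ],
            }
            return yaml.safe_dump({k: v for k, v in cff.items() if v}, sort_keys=False)

        example = {                                    # hypothetical example record
            "name": "my-research-tool",
            "version": "1.0.0",
            "license": "MIT",
            "codeRepository": "https://example.org/my-research-tool",
            "author": [{"givenName": "Ada", "familyName": "Lovelace"}],
        }
        print(codemeta_to_cff(example))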

    Automated FAIR4RS software publication with HERMES

    Software as an important method and output of research should follow the RDA "FAIR for Research Software Principles". In practice, this means that research software, whether open, inner or closed source, should be published with rich metadata to enable FAIR4RS. For research software practitioners, this currently often means following an arduous and mostly manual process of software publication. HERMES, a project funded by the Helmholtz Metadata Collaboration, aims to alleviate this situation. We develop configurable, executable workflows for the publication of rich metadata for research software, alongside the software itself. These workflows follow a push-based approach: they use existing continuous integration solutions, integrated in common code platforms such as GitHub or GitLab, to harvest, unify and collate software metadata from source code repositories and code platform APIs. They also manage curation of the unified metadata and deposits on publication platforms. These deposits are based on deposition requirements and curation steps defined by a targeted publication platform, the depositing institution, or a software management plan. In addition, the HERMES project works to make the widely-used publication platforms InvenioRDM and Dataverse "research software-ready", i.e., able to ingest software publications with rich metadata, and represent software publications and metadata in a way that supports findability, assessability and accessibility of the published software versions. Beyond the open source workflow software, HERMES will openly provide templates for different continuous integration solutions, extensive documentation, and training material. Thus, researchers are enabled to adopt automated software publication quickly and easily. Our poster presents a high-level overview of the HERMES concept, its status and an outlook.
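
    The deposit step described above can be sketched against an InvenioRDM-style records REST API. The base URL, token handling, and metadata mapping below are assumptions for illustration based on InvenioRDM's public records API; this does not show the HERMES workflow tooling itself.

        import requests

        BASE = "https://repository.example.org/api"    # hypothetical InvenioRDM instance
        HEADERS = {"Authorization": "Bearer <personal-access-token>"}

        def deposit(metadata, publish=False):
            """Create a draft record from the collated metadata and optionally publish it."""
            record = {"metadata": {
                "resource_type": {"id": "software"},
                "title": metadata["name"],
                "version": metadata.get("version"),
                "creators": [{"person_or_org": {"type": "personal",
                                                "given_name": a.get("givenName"),
                                                "family_name": a.get("familyName")}}
                             for a in metadata.get("author", [])],
            }}
            draft = requests.post(f"{BASE}/records", json=record, headers=HEADERS)
            draft.raise_for_status()
            record_id = draft.json()["id"]
            if publish:
                # Publishing turns the draft into a citable record with a persistent identifier.
                requests.post(f"{BASE}/records/{record_id}/draft/actions/publish",
                              headers=HEADERS).raise_for_status()
            return record_id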

    Project HERMES: Automated FAIR4RS software publication with HERMES

    Software as an important method and output of research should follow the RDA "FAIR for Research Software Principles". In practice, this means that research software, whether open, inner or closed source, should be published with rich metadata to enable FAIR4RS. For research software practitioners, this currently often means following an arduous and mostly manual process of software publication. HERMES, a project funded by the Helmholtz Metadata Collaboration, aims to alleviate this situation. We develop configurable, executable workflows for the publication of rich metadata for research software, alongside the software itself. These workflows follow a push-based approach: they use existing continuous integration solutions, integrated in common code platforms such as GitHub or GitLab, to harvest, unify and collate software metadata from source code repositories and code platform APIs. They also manage curation of the unified metadata and deposits on publication platforms. These deposits are based on deposition requirements and curation steps defined by a targeted publication platform, the depositing institution, or a software management plan. In addition, the HERMES project works to make the widely-used publication platforms InvenioRDM and Dataverse "research software-ready", i.e., able to ingest software publications with rich metadata, and represent software publications and metadata in a way that supports findability, assessability and accessibility of the published software versions. Beyond the open source workflow software, HERMES will openly provide templates for different continuous integration solutions, extensive documentation, and training material. Thus, researchers are enabled to adopt automated software publication quickly and easily. In this presentation, we provide an overview of the project aims, its current status, and an outlook on future development.

    Automated FAIR4RS: Software publication with HERMES

    Software as an important method and output of research should follow the RDA "FAIR for Research Software Principles". In practice, this means that research software, whether open, inner or closed source, should be published with rich metadata to enable FAIR4RS. For research software practitioners, this currently often means following an arduous and mostly manual process of software publication. HERMES (https://software-metadata.pub), a project funded by the Helmholtz Metadata Collaboration, aims to alleviate this situation by developing configurable, executable workflows for the publication of rich metadata for research software, alongside the software itself. These workflows, following a push-based approach, use existing continuous integration solutions, integrated in widespread code platforms like GitHub or GitLab, to harvest, unify, and collate software metadata that already exist in source code repositories and via code platform APIs. They include curation processes for the unified metadata and take care of the actual deposit on publication platforms, based on deposition requirements and curation steps defined by a targeted publication platform, the depositing institution, or a software management plan. In addition, the HERMES project works to make the widely-used publication platforms InvenioRDM and Dataverse "research software-ready", i.e., able to ingest software publications with rich metadata, and represent software publications and their respective metadata in a usable manner that supports findability, assessability and accessibility of the published software versions. Beyond the open source workflow software, HERMES will openly provide templates for different continuous integration solutions, extensive documentation, and training material. Thus, researchers are enabled to adopt automated software publication quickly and easily.